
Update gradients module to stop mutating operators in-place #4220

Merged: 35 commits merged into master on Jul 11, 2023

Conversation

@mudit2812 (Contributor) commented Jun 6, 2023

Context:
In-place mutation of operators is being removed.

Description of the Change:

  • Update generate_shifted_tapes and generate_multishifted_tapes to stop mutating operators in-place and use bind_new_parameters instead.
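
The change above swaps in-place parameter mutation for constructing fresh operators carrying the shifted parameters. A minimal, library-free sketch of that pattern (the `Operator` dataclass, `bind_new_parameters`, and `generate_shifted_ops` here are hypothetical stand-ins to illustrate the idea, not PennyLane's actual implementation):

```python
from dataclasses import dataclass, replace
from typing import Tuple


@dataclass(frozen=True)
class Operator:
    # Hypothetical stand-in for a parametrized operator: a name plus
    # immutable trainable parameters.
    name: str
    data: Tuple[float, ...]


def bind_new_parameters(op: Operator, params: Tuple[float, ...]) -> Operator:
    # Return a NEW operator carrying the given parameters; `op` is untouched.
    return replace(op, data=params)


def generate_shifted_ops(op: Operator, index: int, shift: float):
    # Sketch of the shifted-tape idea: build one new operator per +/- shift
    # of a single parameter, leaving the original intact for reuse.
    plus = list(op.data)
    minus = list(op.data)
    plus[index] += shift
    minus[index] -= shift
    return bind_new_parameters(op, tuple(plus)), bind_new_parameters(op, tuple(minus))


rx = Operator("RX", (0.5,))
op_plus, op_minus = generate_shifted_ops(rx, 0, 0.1)
print(rx.data, op_plus.data, op_minus.data)
```

Because the original operator is never modified, the same operator instance can safely appear in many shifted tapes at once, which is what makes the copy-free tape generation in this PR possible.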

Benefits:
This change impacts parameter-shift, finite-diff, and spsa grad methods, which see comparable or better performance after removing in-place mutation.

Possible Drawbacks:

Related GitHub Issues:

@mudit2812 (Contributor, Author) commented:

[sc-36756]

codecov bot commented Jun 14, 2023

Codecov Report

Merging #4220 (084f515) into master (0f2cf7d) will not change coverage (Δ +0.00%).
The diff coverage is 100.00%.

@@           Coverage Diff           @@
##           master    #4220   +/-   ##
=======================================
  Coverage   99.79%   99.79%           
=======================================
  Files         351      351           
  Lines       32058    32111   +53     
=======================================
+ Hits        31991    32044   +53     
  Misses         67       67           
Impacted Files Coverage Δ
pennylane/devices/experimental/device_api.py 98.00% <ø> (ø)
pennylane/gradients/hadamard_gradient.py 100.00% <ø> (ø)
pennylane/gradients/jvp.py 100.00% <ø> (ø)
pennylane/gradients/parameter_shift.py 100.00% <ø> (ø)
pennylane/templates/layers/cv_neural_net.py 100.00% <ø> (ø)
pennylane/devices/default_gaussian.py 100.00% <100.00%> (ø)
pennylane/gradients/general_shift_rules.py 100.00% <100.00%> (ø)
pennylane/ops/cv.py 100.00% <100.00%> (ø)
pennylane/ops/functions/bind_new_parameters.py 100.00% <100.00%> (ø)
pennylane/qinfo/transforms.py 100.00% <100.00%> (ø)
... and 6 more

@mudit2812 (Contributor, Author) commented:

Will open for review after running benchmarks

@mudit2812 mudit2812 removed this from the v0.31 milestone Jun 15, 2023
@mudit2812 (Contributor, Author) commented Jul 4, 2023

Ran an optimization workflow on a circuit with StronglyEntanglingLayers with 4 layers, 4 wires and 30 steps. Performance for both parameter-shift and finite-diff is significantly better, and spsa has comparable but slightly slower performance. Check out the benchmark here. Plot of comparison is below.
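
For context on how such numbers are produced, the workflow has the shape of a gradient-descent loop timed end to end. A library-free sketch of that shape (the real benchmark used PennyLane's StronglyEntanglingLayers circuit with its gradient transforms; the toy quadratic cost and helper names here are illustrative stand-ins):

```python
import time


def finite_diff_grad(cost, params, h=1e-7):
    # Central finite-difference gradient: builds SHIFTED COPIES of the
    # parameter list instead of mutating it in place, mirroring the
    # copy-based approach this PR adopts.
    grad = []
    for i in range(len(params)):
        plus = list(params)
        minus = list(params)
        plus[i] += h
        minus[i] -= h
        grad.append((cost(plus) - cost(minus)) / (2 * h))
    return grad


def benchmark(cost, params, steps=30, lr=0.1):
    # Time a simple gradient-descent loop, mirroring the 30-step workflow.
    start = time.perf_counter()
    for _ in range(steps):
        g = finite_diff_grad(cost, params)
        params = [p - lr * gi for p, gi in zip(params, g)]
    return params, time.perf_counter() - start


# Toy quadratic cost standing in for the circuit's expectation value.
cost = lambda p: sum(x * x for x in p)
final, elapsed = benchmark(cost, [0.5, -0.3, 0.8, 0.1])
print(final, f"{elapsed:.4f}s")
```

In the real benchmark, `cost` is a QNode evaluation and the gradient is computed with the parameter-shift, finite-diff, or spsa transform; the dominant cost difference before and after this PR comes from how the shifted tapes are constructed inside that gradient step.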

[benchmark comparison plot]

@mudit2812 mudit2812 marked this pull request as ready for review July 4, 2023 20:43
@mudit2812 mudit2812 added the WIP 🚧 Work-in-progress label Jul 4, 2023
@mudit2812 mudit2812 requested a review from a team July 6, 2023 21:13
Review threads (resolved):

  • doc/releases/changelog-dev.md (outdated)
  • pennylane/gradients/general_shift_rules.py (×2)
  • pennylane/tape/qscript.py
mudit2812 and others added 2 commits July 7, 2023 12:08
Co-authored-by: Matthew Silverman <matthews@xanadu.ai>
@mudit2812 mudit2812 requested review from timmysilv and a team July 7, 2023 16:29
@frederikwilde (Contributor) left a comment:

Looks good to me!

Review threads (resolved): pennylane/gradients/general_shift_rules.py, pennylane/tape/qscript.py
@timmysilv (Contributor) left a comment:

🎉 amazing!

@mudit2812 mudit2812 enabled auto-merge (squash) July 10, 2023 20:16
@mudit2812 mudit2812 disabled auto-merge July 10, 2023 20:16
eddddddy and others added 7 commits July 10, 2023 16:17
* Remove statevector support for qinfo functions

* remove unused funcs

* Fix some tests

* pylint

* more pylint

* Remove unnecessary log

* Changelog and deprecations entry

* trigger ci
* rename all legacy test files to match pylint pattern

* pylint all tests in CI and pre-commit

* lint legacy/qnn/conftest

* remove the custom pylint test handling

* run black before pylint

* changelog

---------

Co-authored-by: David Wierichs <david.wierichs@xanadu.ai>
* use bind_new_parameters

* pylint

* remove batching in measurements and add tests

* More coverage issues

* Add uncopied tests

* Helper function

* pylint

* Make tape._ops public

* use private methods

* pylint

* Add more docs to split_operations

* pylint
* update in docs

* update changelog

* Test warning is raised

* update default gaussian device

* update tests

* update more tests

* fix legacy tests

* Specify v0.33 removal in docstring

* Render X and P in docs

* move deprecation warning in docstring

* fix sphinx linking

* add warning box
* Support wire labels in qinfo transforms

* changelog

* pylint and update test

* add bugfix entry
* deprecate the old return system

* deprecate the mode kwarg for QNode

* changelog

* PR feedback

* update notice to avoid wrongly suggesting action needed

* update docstrings, docs and warnings

* add link to qnode returns doc

* change the mode warning depending on return system active

* also add disclaimer to docstring
@mudit2812 mudit2812 enabled auto-merge (squash) July 10, 2023 20:20
@mudit2812 mudit2812 merged commit 58c01e5 into master Jul 11, 2023
42 checks passed
@mudit2812 mudit2812 deleted the grad_mutate branch July 11, 2023 13:43
Labels
WIP 🚧 Work-in-progress
5 participants